Quantum entropy-typical subspace and universal data compression
Authors
Abstract
Quantum data compression is one of the most fundamental tasks in quantum information theory [1, 2]. Schumacher first provided a tight bound (equal to the von Neumann entropy of the source) to which quantum information may be compressed [3]. Since then, many compression schemes have been proposed [4–13]. In this paper we consider universal quantum data compression in the case where we only know that the entropy of the source does not exceed some given value h. In classical information theory, an explicit example of such compression is a scheme based on the theory of types developed by Csiszár and Körner [14]. They showed that the data can be compressed to h bits/signal by encoding all the sequences x^n for which H(p_x) ≤ h + ε (called CK sequences), where p_x is the type of x^n and H(·) is the Shannon entropy function. For quantum information, an analogous theory was established by Jozsa et al. [10]. They extended the classical CK sequence set to a quantum subspace Ξ(B) for a given basis B, and then to Υ, which is the span of Ξ(B) as B ranges over all bases. They proved that the dimension of Υ is at most a polynomial multiple of dim Ξ(B), so the compression rate h is achievable asymptotically. We note that their proof is based on the CK sequence set, so a natural question arises: is the CK set essential to the proof, or can it be replaced by a smaller set? In this paper, we give the answer. It will be shown that, if we replace the CK set with the entropy-typical set {x^n : |H(p_x) − h| ≤ ε}, the proof still holds. This result is based on the quantum entropy-typical subspace theory, which reveals that any ρ with entropy ≤ h can be preserved by the entropy-typical subspace with entropy = h.
Before presenting our main results, we begin by describing some basic concepts that will be used later. Let χ = {1, 2, ..., d} be an alphabet with d symbols. We use p = (p(1), p(2), ..., p(d)) to denote a probability distribution on χ, where p(a) is the probability of the symbol a. Let X_1, X_2, ..., X_n be a sequence of n symbols from χ. We will use the notation x^n to denote a sequence x_1, x_2, ..., x_n. The type p_x of x^n is the relative proportion of occurrences of each symbol of χ, i.e., p_x(a) = N(a|x^n)/n for all a ∈ χ, where N(a|x^n) is the number of times the symbol a occurs in x^n.
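As an informal illustration of the classical definitions in the abstract (not code from the paper), the Python sketch below computes the type p_x of a sequence, its Shannon entropy H(p_x), and membership in the CK set {x^n : H(p_x) ≤ h + ε} and in the entropy-typical set {x^n : |H(p_x) − h| ≤ ε}; all function names and the example values of h and ε are ours.

```python
# Illustrative helper code for types, Shannon entropy, and the
# CK / entropy-typical membership tests described in the abstract.
from collections import Counter
from math import log2

def sequence_type(x, alphabet):
    """Type p_x of the sequence x^n: p_x(a) = N(a|x^n) / n for every symbol a."""
    n = len(x)
    counts = Counter(x)
    return {a: counts.get(a, 0) / n for a in alphabet}

def shannon_entropy(p):
    """Shannon entropy H(p) in bits, using the convention 0 * log 0 = 0."""
    return -sum(q * log2(q) for q in p.values() if q > 0)

def in_ck_set(x, alphabet, h, eps):
    """CK (Csiszár-Körner) set membership: H(p_x) <= h + eps."""
    return shannon_entropy(sequence_type(x, alphabet)) <= h + eps

def in_entropy_typical_set(x, alphabet, h, eps):
    """Entropy-typical set membership: |H(p_x) - h| <= eps."""
    return abs(shannon_entropy(sequence_type(x, alphabet)) - h) <= eps

# Example: a binary alphabet and a sequence whose type is (1/2, 1/2).
alphabet = [0, 1]
x = [0, 1, 1, 0, 1, 0, 0, 1]
print(shannon_entropy(sequence_type(x, alphabet)))            # 1.0
print(in_ck_set(x, alphabet, h=1.0, eps=0.05))                # True
print(in_entropy_typical_set(x, alphabet, h=1.0, eps=0.05))   # True
```

By these definitions the entropy-typical set is contained in the CK set: it keeps only the types with entropy close to h and discards the low-entropy types that the CK set retains, which is exactly the replacement the paper investigates.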
Similar articles
Reversible arithmetic coding for quantum data compression
We study the problem of compressing a block of symbols (a block quantum state) emitted by a memoryless quantum Bernoulli source. We present a simple-to-implement quantum algorithm for projecting, with high probability, the block quantum state onto the typical subspace spanned by the leading eigenstates of its density matrix. We propose a fixed-rate quantum Shannon–Fano code to compress the proj...
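As a rough, self-contained illustration of the typical-subspace projection mentioned in this related abstract (not the paper's reversible arithmetic-coding scheme), the numpy sketch below builds the projector onto those eigenstates of the n-fold tensor power ρ^⊗n whose eigenvalues lie in the typical window [2^(-n(S+ε)), 2^(-n(S-ε))], where S is the von Neumann entropy of the single-copy state; the state, n, and ε in the example are chosen arbitrarily.

```python
# Illustrative sketch of a typical-subspace projector for an i.i.d. quantum source.
import numpy as np
from functools import reduce

def typical_projector(rho, n, eps):
    """Projector onto eigenvectors of rho^{tensor n} whose eigenvalues lie in
    [2^(-n(S+eps)), 2^(-n(S-eps))], with S the von Neumann entropy of rho."""
    evals, _ = np.linalg.eigh(rho)
    S = -sum(l * np.log2(l) for l in evals if l > 1e-12)     # von Neumann entropy (bits)
    rho_n = reduce(np.kron, [rho] * n)                        # n-fold tensor power of rho
    evals_n, evecs_n = np.linalg.eigh(rho_n)
    lo, hi = 2.0 ** (-n * (S + eps)), 2.0 ** (-n * (S - eps))
    cols = [v for lam, v in zip(evals_n, evecs_n.T) if lo <= lam <= hi]
    if not cols:
        return np.zeros_like(rho_n)
    V = np.array(cols).T                                      # typical eigenvectors as columns
    return V @ V.conj().T                                     # orthogonal projector

# Example: a qubit source with entropy about 0.81 bits, blocks of n = 4 copies.
rho = np.diag([0.75, 0.25])
P = typical_projector(rho, n=4, eps=0.4)
dim = int(round(np.trace(P).real))                            # dimension of the typical subspace
weight = np.trace(P @ reduce(np.kron, [rho] * 4)).real        # probability weight it captures
print(dim, weight)
```

For large n the captured weight approaches 1 while the subspace dimension grows roughly as 2^(nS), which is what makes compression at rate S possible.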
The von Neumann entropy and information rate for integrable quantum Gibbs ensembles, 2 (arXiv:math-ph/0305016v1, 8 May 2003)
This paper considers the problem of data compression for dependent quantum systems. It is the second in a series under the same title which began with [6] and continues with [12]. As in [6], we are interested in Lempel–Ziv encoding for quantum Gibbs ensembles. Here, we consider the canonical ideal lattice Bose- and Fermi-ensembles. We prove that as in the case of the grand canonical ensemble, the...
Quantum source coding and data compression
This lecture is intended to be an easily accessible first introduction to quantum information theory. The field is large and it is not completely covered even by the recent monograph [15]. Therefore the simple topic of data compression is selected to present some ideas of the theory. Classical information theory is not a prerequisite; we start with the basics of Shannon theory to give a feeling...
Monotonicity of quantum relative entropy and recoverability
The relative entropy is a principal measure of distinguishability in quantum information theory, with its most important property being that it is non-increasing under noisy quantum operations. Here, we establish a remainder term for this inequality that quantifies how well one can recover from a loss of information by employing a rotated Petz recovery map. The main approach for proving this re...
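For orientation, the quantities involved can be written schematically as below; the precise remainder term and the definition of the rotated Petz recovery map R_{σ,N} are as given in the paper above and are not reproduced here.

```latex
% Quantum relative entropy and its monotonicity under a quantum channel N;
% the second display is only a schematic form of the recoverability refinement,
% with F the fidelity and R_{sigma,N} a (rotated) Petz recovery map.
\[
  D(\rho \,\|\, \sigma) = \operatorname{Tr}\!\left[\rho\,(\log\rho - \log\sigma)\right],
  \qquad
  D(\rho \,\|\, \sigma) \;\ge\; D\!\left(\mathcal{N}(\rho) \,\|\, \mathcal{N}(\sigma)\right).
\]
\[
  D(\rho \,\|\, \sigma) - D\!\left(\mathcal{N}(\rho) \,\|\, \mathcal{N}(\sigma)\right)
  \;\ge\; -2 \log F\!\left(\rho,\ (\mathcal{R}_{\sigma,\mathcal{N}} \circ \mathcal{N})(\rho)\right).
\]
```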
Universal quantum information compression and degrees of prior knowledge
We describe a universal information compression scheme that compresses any pure quantum i.i.d. source asymptotically to its von Neumann entropy, with no prior knowledge of the structure of the source. We introduce a diagonalisation procedure that enables any classical compression algorithm to be utilised in a quantum context. Our scheme is then based on the corresponding quantum translation of ...
Journal: Quantum Information Processing
Volume: 13
Issue: -
Pages: -
Publication year: 2014